Question Answering for Machine Reading Evaluation

Authors

  • Álvaro Rodrigo
  • Anselmo Peñas
  • Eduard H. Hovy
  • Emanuele Pianta
Abstract

Question Answering (QA) evaluation potentially provides a way to evaluate systems that attempt to understand texts automatically. Although current QA technologies are still unable to answer complex questions that require deep inference, we believe QA evaluation techniques must be adapted to drive QA research in the direction of deeper understanding of texts. In this paper we propose such evolution by suggesting an evaluation methodology focused on the understanding of individual documents at a deeper level.


Related Papers

Overview of QA4MRE at CLEF 2012: Question Answering for Machine Reading Evaluation

This paper describes the Question Answering for Machine Reading (QA4MRE) task at the 2012 Cross Language Evaluation Forum. In the main task, systems answered multiple-choice questions on documents concerned with four different topics. There were also two pilot tasks, Processing Modality and Negation for Machine Reading, and Machine Reading on Biomedical Texts about Alzheimer's disease. This pap...


Question Answering for Machine Reading Evaluation on Romanian and English Languages

This paper describes UAIC’s Question Answering for Machine Reading Evaluation systems participating in the QA4MRE 2011 evaluation task. The system is designed to extract knowledge from large volumes of text and to use this knowledge to answer questions in Romanian and English monolingual tasks. Our systems were built on the architecture of a Question Answering system, customized for this new ta...


Using Anaphora Resolution in a Question Answering System for Machine Reading Evaluation

This paper describes UAIC's Question Answering for Machine Reading Evaluation systems participating in the QA4MRE 2013 evaluation task. We submitted two types of runs, both based on our system from the 2012 edition of QA4MRE, and both using an anaphora resolution system. The differences come from whether or not the textual entailment component was used. The results offered by the organizer showe...


Reflections on TREC QA

The TREC (later TAC) Question Answering track reinvigorated the question answering research community, fostering extensive research on different question types and finding answers in different kinds of corpora. Parallel evaluations extended the research further to include a variety of languages and media types. In recent years, the TAC QA track evolved into the Knowledge Base Population (KBP) t...


Evaluating Machine Reading Systems through Comprehension Tests

This paper describes a methodology for testing and evaluating the performance of Machine Reading systems through Question Answering and Reading Comprehension Tests. The methodology is being used in QA4MRE (QA for Machine Reading Evaluation), one of the labs of CLEF. We report here the conclusions and lessons learned after the first campaign in 2011.


Enhancing a Question Answering System with Textual Entailment for Machine Reading Evaluation

This paper describes UAIC's Question Answering for Machine Reading Evaluation systems participating in the QA4MRE 2012 evaluation task. We submitted two types of runs: the first based on our system from the 2011 edition of QA4MRE, and the second based on a Textual Entailment system. For the second type of runs, we construct the Text and the Hypothesis required by the Textual Entailment syste...




Journal:

Volume   Issue 

Pages  -

Publication date: 2010